
DeformAr: Rethinking NER Evaluation through Component Analysis and Visual Analytics

Younes, Ahmed Mustafa

arXiv.org Artificial Intelligence

Transformer models have significantly advanced Natural Language Processing (NLP), demonstrating strong performance in English. However, their effectiveness in Arabic, particularly for Named Entity Recognition (NER), remains limited, even with larger pre-trained models. This performance gap stems from multiple factors, including tokenisation, dataset quality, and annotation inconsistencies. Existing studies often analyze these issues in isolation, failing to capture their joint effect on system behaviour and performance. We introduce DeformAr (Debugging and Evaluation Framework for Transformer-based NER Systems), a novel framework designed to investigate and explain the performance discrepancy between Arabic and English NER systems. DeformAr integrates a data extraction library and an interactive dashboard, supporting two modes of evaluation: cross-component analysis and behavioural analysis. The framework divides each language into dataset and model components to examine their interactions. The analysis proceeds in two stages. First, cross-component analysis provides systematic diagnostic measures across data and model subcomponents, addressing the "what," "how," and "why" behind observed discrepancies. The second stage applies behavioural analysis by combining interpretability techniques with token-level metrics, interactive visualisations, and representation space analysis. These stages enable a component-aware diagnostic process that detects model behaviours and explains them by linking them to underlying representational patterns and data factors. DeformAr is the first Arabic-specific, component-based interpretability tool, offering a crucial resource for advancing model analysis in under-resourced languages.
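The tokenisation factor cited above can be probed with a simple diagnostic. As a hedged illustration (the `fertility` helper and the toy tokenizer below are ours, not part of DeformAr), the average subword count per word is one signal of how hard it becomes to align word-level NER labels with the subword tokens a transformer actually sees:

```python
def fertility(tokenize, words):
    """Average number of subword tokens per word.

    High fertility (many fragments per word) tends to make word-level
    NER labels harder to align with the model's subword tokens, a known
    pain point for morphologically rich languages such as Arabic.
    """
    return sum(len(tokenize(w)) for w in words) / len(words)

# Toy stand-in tokenizer: splits every word into 2-character pieces.
toy_tokenize = lambda w: [w[i:i + 2] for i in range(0, len(w), 2)]

print(fertility(toy_tokenize, ["abcd", "ab"]))  # (2 + 1) / 2 = 1.5
```

Comparing such a fertility score between an Arabic and an English corpus under the same tokenizer gives a first quantitative handle on the cross-language gap the abstract describes.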





Designing value-aligned autonomous vehicles: from moral dilemmas to conflict-sensitive design

AIHub

Imagine an autonomous car driving along a quiet suburban road when a dog suddenly runs onto the road. The system must brake hard and decide, within a fraction of a second, whether to swerve into oncoming traffic (where the other autonomous car might make space), steer right and hit the roadside barrier, or continue straight and injure the dog. The first two options risk only material damage; the last harms a living creature. Each choice is justifiable and involves trade-offs between safety, property, and ethical concerns. However, today's autonomous systems are not designed to take such value-laden conflicts explicitly into account.


Readability Measures and Automatic Text Simplification: In the Search of a Construct

Cardon, Rémi, Doğruöz, A. Seza

arXiv.org Artificial Intelligence

Readability is a key concept in the current era of abundant written information. To help make texts more readable and information more accessible to everyone, a line of research aims at making texts accessible for their target audience: automatic text simplification (ATS). Lately, there have been studies on the correlations between automatic evaluation metrics in ATS and human judgment. However, the correlations between those two aspects and commonly available readability measures (such as readability formulas or linguistic features) have received much less attention. In this work, we investigate the place of readability measures in ATS by complementing the existing studies on evaluation metrics and human judgment, on English. We first discuss the relationship between ATS and research in readability; then we report a study on correlations between readability measures and human judgment, and between readability measures and ATS evaluation metrics. We find that, in general, readability measures do not correlate well with automatic metrics or human judgment. We argue that, as the three different angles from which simplification can be assessed tend to exhibit rather low correlations with one another, there is a need for a clear definition of the construct in ATS.
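As a concrete example of the readability formulas discussed above, the classic Flesch Reading Ease score combines sentence length and syllable density. The sketch below assumes the caller has already counted words, sentences, and syllables; it is an illustration of one standard formula, not code from the paper:

```python
def flesch_reading_ease(n_words, n_sentences, n_syllables):
    """Flesch Reading Ease with its standard coefficients.

    Higher scores mean easier text: roughly 90-100 is very easy,
    0-30 is very difficult (academic prose).
    """
    return (206.835
            - 1.015 * (n_words / n_sentences)
            - 84.6 * (n_syllables / n_words))

# A 100-word passage with 5 sentences and 130 syllables:
score = flesch_reading_ease(100, 5, 130)
print(score)  # about 76.6, i.e. fairly easy text
```

Note that such formulas only see surface statistics (length and syllable counts), which is one reason they can diverge from both human judgment and ATS evaluation metrics.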


Simulating Misinformation Vulnerabilities With Agent Personas

Farr, David, Ng, Lynnette Hui Xian, Prochaska, Stephen, Cruickshank, Iain J., West, Jevin

arXiv.org Artificial Intelligence

School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA

ABSTRACT Disinformation campaigns can distort public perception and destabilize institutions. Understanding how different populations respond to information is crucial for designing effective interventions, yet real-world experimentation is impractical and ethically challenging. To address this, we develop an agent-based simulation using Large Language Models (LLMs) to model responses to misinformation. We construct agent personas spanning five professions and three mental schemas, and evaluate their reactions to news headlines. Our findings show that LLM-generated agents align closely with ground-truth labels and human predictions, supporting their use as proxies for studying information responses. We also find that mental schemas, more than professional background, influence how agents interpret misinformation. This work provides a validation of LLMs to be used as agents in an agent-based model of an information network for analyzing trust, polarization, and susceptibility to deceptive content in complex social systems.

1 INTRODUCTION Protection against foreign information campaigns and the ability to conduct effective information operations are critical to modern national security. In an era where the information domain can be leveraged as a battlefield, there is a need to maintain information advantage, defined as "the use, protection, and exploitation of information to achieve objectives more effectively than enemies and adversaries do" (U.S. Achieving and sustaining information advantage requires not only the ability to disseminate compelling narratives but also to detect, counter, and mitigate adversarial information operations.
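The persona construction step described above (a profession crossed with a mental schema, reacting to a headline) can be sketched as a simple prompt template. The wording and the `persona_prompt` function are illustrative assumptions, not the authors' actual prompts:

```python
def persona_prompt(profession: str, schema: str, headline: str) -> str:
    """Compose an LLM prompt for one agent persona.

    The persona combines a professional background with a mental schema,
    then asks the model for a trust judgment on a news headline.
    """
    return (
        f"You are a {profession} who tends to interpret news through a "
        f"{schema} mental schema.\n"
        f"Headline: {headline}\n"
        "Would you trust and share this headline? Answer 'trust' or "
        "'distrust' and give a one-sentence reason."
    )

prompt = persona_prompt("nurse", "scientific-skeptic", "Miracle cure found")
print(prompt)
```

Running the same set of headlines through every profession x schema combination and comparing the trust labels against ground truth is the kind of experiment the abstract reports.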


Communication-Constrained Private Decentralized Online Personalized Mean Estimation

Yakimenka, Yauhen, Lin, Hsuan-Yin, Rosnes, Eirik, Kliewer, Jörg

arXiv.org Artificial Intelligence

We consider the problem of communication-constrained collaborative personalized mean estimation under a privacy constraint, in an environment of several agents continuously receiving data according to arbitrary unknown agent-specific distributions. A consensus-based algorithm is studied under the framework of differential privacy in order to protect each agent's data. We give a theoretical convergence analysis of the proposed consensus-based algorithm for any bounded unknown distributions on the agents' data. Under an oracle decision rule, and under some restrictions on the privacy level and the agents' connectivity, we show that collaboration provides faster convergence than a fully local approach where agents do not share data, which illustrates the benefit of private collaboration in an online setting under a communication restriction on the agents. The theoretical faster-than-local convergence guarantee is backed up by several numerical results. Interest in collaborative learning has grown considerably in recent years, fueled by prominent frameworks such as federated learning (FL) [1]-[3], which offers a partially decentralized approach, and fully decentralized methods like swarm learning [4].
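One way to picture the setup is a single consensus (gossip) round in which each agent shares a Laplace-noised estimate with its neighbours. The sketch below is a hedged illustration of that general pattern, not the paper's algorithm, its oracle decision rule, or its communication constraint:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_consensus_step(estimates, adjacency, epsilon, sensitivity=1.0):
    """One consensus round under (epsilon-)differential privacy.

    Each agent releases its estimate with Laplace noise of scale
    sensitivity/epsilon, then averages its own exact estimate with the
    noisy estimates received from its graph neighbours.
    """
    estimates = np.asarray(estimates, dtype=float)
    noisy = estimates + rng.laplace(scale=sensitivity / epsilon,
                                    size=estimates.shape)
    updated = np.empty_like(estimates)
    for i in range(len(estimates)):
        neighbours = np.flatnonzero(adjacency[i])
        updated[i] = (estimates[i] + noisy[neighbours].sum()) / (1 + len(neighbours))
    return updated

# Three fully connected agents: with weak noise (large epsilon), one
# round already pulls every local estimate to the global mean, 2.0.
adj = np.ones((3, 3)) - np.eye(3)
print(dp_consensus_step([0.0, 2.0, 4.0], adj, epsilon=1e6))
```

The trade-off the paper analyses lives in `epsilon`: smaller values mean stronger privacy but noisier shared estimates, so collaboration only beats the fully local approach when the privacy level and connectivity satisfy the stated restrictions.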


Evaluating Simplification Algorithms for Interpretability of Time Series Classification

Håvardstun, Brigt, Marti-Perez, Felix, Ferri, Cèsar, Telle, Jan Arne

arXiv.org Artificial Intelligence

In this work, we introduce metrics to evaluate the use of simplified time series in the context of interpretability of a TSC (a Time Series Classifier). Such simplifications are important because time series data, in contrast to text and image data, are not intuitively understandable to humans. These metrics are related to the complexity of the simplifications (how many segments they contain) and to their loyalty (how likely they are to maintain the classification of the original time series). We focus on simplifications that select a subset of the original data points, and show that these typically have high Shapley value, thereby aiding interpretability. We employ these metrics to experimentally evaluate four distinct simplification algorithms, across several TSC algorithms and across datasets of varying characteristics, from seasonal or stationary to short or long. We subsequently perform a human-grounded evaluation with forward simulation, which also confirms the practical utility of the introduced metrics for evaluating the use of simplifications in the context of interpretability of TSC. Our findings are summarized in a framework for deciding, for a given TSC, whether the various simplifications are likely to aid in its interpretability.
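The two metric families named above can be sketched directly from their descriptions. The function names and exact definitions below are our reading of the abstract, not the paper's code:

```python
def complexity(simplification):
    """Complexity of a simplification: the number of segments/points it contains."""
    return len(simplification)

def loyalty(classifier, originals, simplifications):
    """Loyalty: the fraction of simplifications that keep the classifier's
    original label, i.e. how often the simplification maintains the
    classification of the original time series."""
    kept = sum(classifier(s) == classifier(o)
               for o, s in zip(originals, simplifications))
    return kept / len(originals)

# Toy TSC: classify a series by the sign of its mean.
tsc = lambda series: int(sum(series) / len(series) > 0)

originals = [[1, 2, 3], [-1, -2, -3]]
simplified = [[1, 3], [-1, -3]]   # subsets of the original data points
print(loyalty(tsc, originals, simplified))  # 1.0: both labels preserved
```

A good simplification algorithm then sits on the favourable end of both axes: low complexity (few segments) and high loyalty (the label rarely flips).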